# Retrieval Optimization
## Speed Embedding 7B Instruct

Speed Embedding 7B Instruct is a Transformer-based large language model focused on text embedding and classification tasks, and it has shown strong performance across multiple benchmarks.

Author: Haon-Chen · License: MIT · Tags: Large Language Model, Transformers, English · Downloads: 37 · Likes: 5
## NoInstruct Small Embedding v0

NoInstruct Small Embedding Model v0 is an improved embedding model that focuses on strengthening retrieval performance while remaining independent of arbitrary instruction encoding.

Author: avsolatorio · License: MIT · Tags: Text Embedding, Transformers, English · Downloads: 90.76k · Likes: 22
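As a rough illustration of how an instruction-free embedding model like this can be used for retrieval, the sketch below loads it with sentence-transformers and ranks documents by cosine similarity against a query. The repo id, the sentence-transformers loading path, and the default pooling are assumptions for illustration, not the model's documented recipe.

```python
# Minimal retrieval sketch with an instruction-free embedding model.
# The repo id below is an assumption; substitute the actual model id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("avsolatorio/NoInstruct-small-Embedding-v0")

query = "How do I reset my password?"
documents = [
    "To reset your password, open Settings and choose 'Reset password'.",
    "Our office is closed on public holidays.",
]

# No instruction prefix is prepended: queries and documents are encoded as raw text.
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(documents, convert_to_tensor=True)

# Rank documents by cosine similarity to the query.
scores = util.cos_sim(query_emb, doc_embs)[0].tolist()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {doc}")
```

Because no instruction template is involved, the same encode call serves both the query side and the document side of the index.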
## Fio Base Japanese v0.1

The first release in the Fio series of Japanese embedding models, built on a BERT architecture and focused on Japanese text similarity and feature extraction.

Author: bclavie · Tags: Text Embedding, Transformers, Japanese · Downloads: 79 · Likes: 7
## E5 Large En Ru

A vocabulary-pruned version of the intfloat/multilingual-e5-large model that retains only Russian and English tokens while preserving the original model's performance.

Author: d0rj · License: MIT · Tags: Text Embedding, Transformers, Multilingual (English, Russian) · Downloads: 712 · Likes: 9
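E5-family models are conventionally encoded with a "query: " prefix on the search side and a "passage: " prefix on the index side; the sketch below assumes this pruned English/Russian variant inherits that convention from its multilingual-e5-large parent. The repo id and the sentence-transformers loading path are assumptions for illustration.

```python
# Hypothetical sketch of E5-style usage with query/passage prefixes.
# The repo id below is an assumption; substitute the actual model id.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("d0rj/e5-large-en-ru")

queries = ["query: how to train an embedding model"]
passages = [
    "passage: Embedding models are usually fine-tuned with a contrastive loss.",
    "passage: Москва - столица России.",  # Russian is also covered by the pruned vocabulary
]

# Normalized embeddings let cosine similarity reduce to a dot product.
q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

print(util.cos_sim(q_emb, p_emb))
```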